synthetic pattern
Recognition of Unseen Combined Motions via Convex Combination-based EMG Pattern Synthesis for Myoelectric Control
Yazawa, Itsuki, Yoneda, Seitaro, Furui, Akira
Electromyogram (EMG) signals recorded from the skin surface enable intuitive control of assistive devices such as prosthetic limbs. However, in EMG-based motion recognition, collecting comprehensive training data for all target motions remains challenging, particularly for complex combined motions. This paper proposes a method to efficiently recognize combined motions using synthetic EMG data generated through convex combinations of basic motion patterns. Instead of measuring all possible combined motions, the proposed method utilizes measured basic motion data along with synthetically combined motion data for training. This approach expands the range of recognizable combined motions while minimizing the required training data collection. We evaluated the effectiveness of the proposed method through an upper limb motion classification experiment with eight subjects. The experimental results demonstrated that the proposed method improved the classification accuracy for unseen combined motions by approximately 17%.
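The convex-combination idea in this abstract can be sketched in a few lines: a synthetic combined-motion feature vector is formed as a weighted mixture of two measured basic-motion patterns, with weights that sum to one. This is a minimal illustration, not the paper's implementation; the feature representation (here, 8-channel RMS-like vectors) and the weight range are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_combined(pattern_a, pattern_b, n_samples=100, rng=rng):
    """Generate synthetic combined-motion EMG feature vectors as convex
    combinations of two basic-motion patterns: x = w*a + (1-w)*b, w in (0,1)."""
    weights = rng.uniform(0.2, 0.8, size=n_samples)  # mixing coefficients
    return weights[:, None] * pattern_a + (1 - weights)[:, None] * pattern_b

# Toy example: feature vectors for two basic motions over 8 EMG channels.
wrist_flexion = rng.random(8)
hand_grasp = rng.random(8)
synthetic = synthesize_combined(wrist_flexion, hand_grasp)
print(synthetic.shape)  # (100, 8)
```

Because each sample is a convex combination, every synthetic feature lies channel-wise between the two basic-motion patterns, which is what lets a classifier trained on them cover the unseen combined motion.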
Pre-training with Synthetic Patterns for Audio
Ishikawa, Yuchi, Komatsu, Tatsuya, Aoki, Yoshimitsu
In this paper, we propose to pre-train audio encoders using synthetic patterns instead of real audio data. Our proposed framework consists of two key elements. The first is the Masked Autoencoder (MAE), a self-supervised learning framework that learns by reconstructing data from randomly masked inputs. MAEs tend to focus on low-level information such as visual patterns and regularities within the data. Therefore, it is unimportant what the input portrays, whether it be images, audio mel-spectrograms, or even synthetic patterns. This leads to the second key element: synthetic data. Synthetic data, unlike real audio, is free from privacy and licensing infringement issues. By combining MAEs and synthetic patterns, our framework enables the model to learn generalized feature representations without real data, while addressing the issues related to real audio. To evaluate the efficacy of our framework, we conduct extensive experiments across a total of 13 audio tasks and 17 synthetic datasets. The experiments provide insights into which types of synthetic patterns are effective for audio. Our results demonstrate that our framework achieves performance comparable to models pre-trained on AudioSet-2M and partially outperforms image-based pre-training methods.
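The key MAE operation the abstract relies on is random patch masking: the input (a mel-spectrogram or a synthetic pattern) is split into patches, most of which are hidden before reconstruction. A minimal NumPy sketch of that masking step, with patch size and mask ratio chosen as typical MAE defaults rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_mask_patches(spec, patch=16, mask_ratio=0.75, rng=rng):
    """Split a (freq, time) array into non-overlapping patches and randomly
    zero out a fraction of them, as in Masked Autoencoder pre-training."""
    f, t = spec.shape
    fp, tp = f // patch, t // patch          # patches per axis
    n = fp * tp
    idx = rng.permutation(n)[: int(n * mask_ratio)]  # patches to mask
    masked = spec.copy()
    for i in idx:
        r, c = divmod(i, tp)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, idx

# Stand-in for a mel-spectrogram or a synthetic pattern of the same shape.
spec = rng.random((128, 128))
masked, idx = random_mask_patches(spec)
print(len(idx))  # 48 of 64 patches masked
```

Since the encoder only ever sees patch statistics, the same pipeline runs unchanged whether `spec` holds real audio features or a purely synthetic pattern, which is the point the abstract makes.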
Deepfake Detection without Deepfakes: Generalization via Synthetic Frequency Patterns Injection
Coccomini, Davide Alessandro, Caldelli, Roberto, Gennaro, Claudio, Fiameni, Giuseppe, Amato, Giuseppe, Falchi, Fabrizio
Deepfake detectors are typically trained on large sets of pristine and generated images, resulting in limited generalization capacity; they excel at identifying deepfakes created through methods encountered during training but struggle with those generated by unknown techniques. This paper introduces a learning approach aimed at significantly enhancing the generalization capabilities of deepfake detectors. Our method takes inspiration from the unique "fingerprints" that image generation processes consistently introduce into the frequency domain. These fingerprints manifest as structured and distinctly recognizable frequency patterns. We propose to train detectors using only pristine images, injecting crafted frequency patterns into a portion of them to simulate the effects of various deepfake generation techniques without being specific to any. These synthetic patterns are based on generic shapes, grids, or auras. We evaluated our approach using diverse architectures across 25 different generation methods. The models trained with our approach achieved state-of-the-art deepfake detection, also demonstrating superior generalization capabilities compared with previous methods. Indeed, they are untied to any specific generation technique and can effectively identify deepfakes regardless of how they were made.
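The injection step described above can be illustrated with a grid-style pattern: add regularly spaced peaks to a pristine image's 2-D Fourier spectrum, producing the kind of periodic frequency "fingerprint" that generators leave behind. This is a hedged sketch; the actual pattern shapes, spacings, and strengths used in the paper are not specified here and the values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_grid_fingerprint(image, spacing=8, strength=0.05):
    """Add a synthetic grid of peaks to an image's frequency spectrum,
    mimicking the periodic fingerprints of image generation pipelines."""
    spectrum = np.fft.fft2(image)
    h, w = image.shape
    mask = np.zeros((h, w))
    mask[::spacing, ::spacing] = 1.0  # regularly spaced frequency peaks
    spectrum += strength * np.abs(spectrum).max() * mask
    return np.real(np.fft.ifft2(spectrum))

pristine = rng.random((64, 64))       # stand-in for a pristine image channel
fake_like = inject_grid_fingerprint(pristine)
print(fake_like.shape)  # (64, 64)
```

A detector is then trained to separate untouched pristine images from injected ones, so it learns to respond to frequency artifacts in general rather than to any single generator's signature.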
On Improving Hotspot Detection Through Synthetic Pattern-Based Database Enhancement
Reddy, Gaurav Rajavendra, Xanthopoulos, Constantinos, Makris, Yiorgos
Continuous technology scaling and the introduction of advanced technology nodes in Integrated Circuit (IC) fabrication are constantly exposing new manufacturability issues. One such issue, stemming from complex interaction between design and process, is the problem of design hotspots. Such hotspots are known to vary from design to design and, ideally, should be predicted early and corrected in the design stage itself, as opposed to relying on the foundry to develop process fixes for every hotspot, which would be intractable. In the past, various efforts have been made to address this issue by using a known database of hotspots as the source of information. The majority of these efforts use either Machine Learning (ML) or Pattern Matching (PM) techniques to identify and predict hotspots in new incoming designs. However, almost all of them suffer from high false-alarm rates, mainly because they are oblivious to the root causes of hotspots. In this work, we seek to address this limitation through a novel database enhancement approach: synthetic pattern generation based on carefully crafted Design of Experiments (DOEs). The effectiveness of the proposed method against the state-of-the-art is evaluated on a 45nm process using industry-standard tools and designs.
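A Design of Experiments for synthetic pattern generation amounts to systematically sweeping layout parameters and emitting one pattern variant per combination. The sketch below shows the simplest full-factorial form of that enumeration; the parameter names and level values are illustrative assumptions, not the paper's actual DOE.

```python
from itertools import product

def doe_pattern_variants(widths, spacings, lengths):
    """Enumerate a full-factorial DOE over basic layout parameters (in nm);
    each combination defines one synthetic pattern variant to add to the
    hotspot database."""
    return [{"width": w, "spacing": s, "length": l}
            for w, s, l in product(widths, spacings, lengths)]

variants = doe_pattern_variants([45, 50, 55], [65, 70], [200, 400])
print(len(variants))  # 3 * 2 * 2 = 12 variants
```

In practice each variant would be rendered into layout geometry and run through lithography simulation to label it as a hotspot or non-hotspot, so the enhanced database ties patterns to their root-cause parameters.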